On ɛ-optimal strategies in discounted Markov games
Authors
Abstract
Similar Resources
Structural approximations in discounted semi-Markov games
We consider the problem of approximating the values and the equilibria in two-person zero-sum discounted semi-Markov games with infinite horizon and compact action spaces, when several uncertainties are present about the parameters of the model. Specifically: on the one hand, we study approximations made on the transition probabilities, the discount factor and the reward functions when the state ...
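For orientation only (not taken from the abstract above): the discounted zero-sum model behind these approximation questions is usually summarized by Shapley's fixed-point equation, written here in generic notation ($r$ for the reward, $p$ for the transition law, $\beta \in [0,1)$ for the discount factor), not in the cited paper's own symbols:

\[
  v(s) \;=\; \operatorname*{val}_{a \in A(s),\, b \in B(s)}
  \Big[\, r(s,a,b) \;+\; \beta \sum_{s' \in S} p(s' \mid s,a,b)\, v(s') \Big],
  \qquad 0 \le \beta < 1 .
\]

An ε-optimal strategy for the maximizing player guarantees at least $v(s) - \varepsilon$ against every strategy of the opponent; the approximation results above concern how such guarantees degrade when $r$, $p$ or $\beta$ are only known approximately.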
Sampling Techniques for Markov Games: Approximation Results on Sampling Techniques for Zero-sum, Discounted Markov Games
We extend the “policy rollout” sampling technique for Markov decision processes to Markov games, and provide an approximation result guaranteeing that the resulting sampling-based policy is closer to the Nash equilibrium than the underlying base policy. This improvement is achieved with an amount of sampling that is independent of the state-space size. We base our approximation result on a more...
Existence of optimal strategies in Markov games with incomplete information
The existence of a value and optimal strategies is proved for the class of two-person repeated games where the state follows a Markov chain independently of players’ actions and at the beginning of each stage only player one is informed about the state. The results apply to the case of standard signaling where players’ stage actions are observable, as well as to the model with general signals pr...
Stationary Markov perfect equilibria in discounted stochastic games
The existence of stationary Markov perfect equilibria in stochastic games is shown under a general condition called “(decomposable) coarser transition kernels”. This result covers various earlier existence results on correlated equilibria, noisy stochastic games, stochastic games with finite actions and state-independent transitions, and stochastic games with mixtures of constant transition kern...
Sampling Techniques for Zero-sum, Discounted Markov Games
In this paper, we first present a key approximation result for zero-sum, discounted Markov games, providing bounds on the state-wise loss and the loss in the sup norm resulting from using approximate Q-functions. Then we extend the policy rollout technique for MDPs to Markov games. Using our key approximation result, we prove that, under certain conditions, the rollout technique gives rise to a...
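The rollout idea summarized in the two sampling abstracts above lends itself to a short sketch. The toy model, names and sample sizes below are assumptions chosen for illustration; they do not reproduce the cited papers' algorithms or bounds. The sketch estimates the base policies' Q-matrix at the current state by Monte Carlo simulation and then plays the maximin mixed strategy of that estimated matrix game, solved with scipy.optimize.linprog.

# Hypothetical sketch of policy rollout for a zero-sum, discounted Markov game.
# The random game below is a toy model for illustration only.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# small random zero-sum Markov game: reward r(s,a,b) and transitions p(.|s,a,b)
n_states, n_a, n_b, beta = 4, 2, 2, 0.9
reward = rng.uniform(-1.0, 1.0, size=(n_states, n_a, n_b))
trans = rng.dirichlet(np.ones(n_states), size=(n_states, n_a, n_b))

# fixed (uniform) base policies for both players
base1 = np.full((n_states, n_a), 1.0 / n_a)
base2 = np.full((n_states, n_b), 1.0 / n_b)

def rollout_return(s, a, b, horizon=60):
    """One simulated discounted return: play (a, b) now, base policies afterwards."""
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        total += discount * reward[s, a, b]
        s = rng.choice(n_states, p=trans[s, a, b])
        discount *= beta
        a = rng.choice(n_a, p=base1[s])
        b = rng.choice(n_b, p=base2[s])
    return total

def estimated_q(s, n_samples=200):
    """Monte Carlo estimate of the base-policy Q-matrix at state s."""
    q = np.zeros((n_a, n_b))
    for a in range(n_a):
        for b in range(n_b):
            q[a, b] = np.mean([rollout_return(s, a, b) for _ in range(n_samples)])
    return q

def matrix_game_maximin(M):
    """Maximizing player's optimal mixed strategy and the value of matrix game M."""
    m, n = M.shape
    c = np.concatenate([np.zeros(m), [-1.0]])          # variables: x_1..x_m, v; minimize -v
    A_ub = np.hstack([-M.T, np.ones((n, 1))])          # v - x^T M[:, j] <= 0 for every column j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)   # sum(x) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)], method="highs")
    return res.x[:m], -res.fun

if __name__ == "__main__":
    s = 0
    q_hat = estimated_q(s)
    strategy, value = matrix_game_maximin(q_hat)
    print("rollout strategy at state", s, "=", np.round(strategy, 3),
          "| estimated security level =", round(value, 3))

The point of the construction, as in the abstracts above, is that the sampling effort depends on the number of actions and the simulation budget at the current state, not on the size of the state space.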
Journal
Journal title: Banach Center Publications
Year: 1985
ISSN: 0137-6934, 1730-6299
DOI: 10.4064/-14-1-263-276